133 research outputs found

    Proceedings. 22. Workshop Computational Intelligence, Dortmund, 6. - 7. Dezember 2012

    These proceedings contain the contributions to the 22nd workshop "Computational Intelligence" of Technical Committee 5.14 of the VDI/VDE-Gesellschaft für Mess- und Automatisierungstechnik (GMA), held in Dortmund on 6-7 December 2012. The focal topics are methods, applications, and tools for fuzzy systems, artificial neural networks, evolutionary algorithms, and data-mining techniques, as well as the comparison of methods on industrial applications and benchmark problems.

    Similarity Measure Development for Case-Based Reasoning- A Data-driven Approach

    In this paper, we demonstrate a data-driven methodology for modelling the local similarity measures of the various attributes in a dataset. We analyse the spread of the numerical attributes and estimate their distribution using a polynomial function, showcasing an approach for deriving strong initial value ranges for numerical attributes, and we use a non-overlapping distribution for categorical attributes so that the entire similarity range [0,1] is utilized. We use an open-source dataset to demonstrate the modelling and development of the similarity measures, and we present a case-based reasoning (CBR) system that can be used to search for the most relevant similar cases.
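
    The abstract leaves the concrete form of the measures open; as a rough illustration only (the attribute names, the power-function shaping, and the similarity table below are assumptions, not the authors' implementation), a local-plus-global similarity scheme of this kind can be sketched as follows:

```python
def numeric_local_similarity(a, b, lo, hi, degree=1):
    """Local similarity of two numeric values on the range [lo, hi].

    The normalized distance is mapped through a simple power function;
    the degree stands in for the distribution-based shaping described in
    the paper, so that the whole similarity range [0, 1] gets used.
    """
    d = abs(a - b) / (hi - lo)          # normalized distance in [0, 1]
    return (1.0 - d) ** degree          # polynomial shaping of similarity

# Non-overlapping similarity table for a categorical attribute
# (values and numbers are made up for illustration).
CATEGORY_SIM = {
    ("sedan", "hatchback"): 0.7,
    ("sedan", "truck"): 0.2,
}

def categorical_local_similarity(a, b, table=CATEGORY_SIM):
    if a == b:
        return 1.0
    return table.get((a, b), table.get((b, a), 0.0))

def global_similarity(query, case, weights, attribute_sims):
    """Weighted aggregation of local similarities into a global score."""
    total = sum(weights.values())
    return sum(w * attribute_sims[name](query[name], case[name])
               for name, w in weights.items()) / total

# Hypothetical usage with two attributes of a car dataset.
sims = {
    "price": lambda a, b: numeric_local_similarity(a, b, lo=5_000, hi=50_000),
    "body":  categorical_local_similarity,
}
weights = {"price": 2.0, "body": 1.0}
score = global_similarity({"price": 12_000, "body": "sedan"},
                          {"price": 15_000, "body": "hatchback"},
                          weights, sims)
```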

    Proceedings. 24. Workshop Computational Intelligence, Dortmund, 27. - 28. November 2014

    These proceedings contain the contributions to the 24th workshop "Computational Intelligence" of Technical Committee 5.14 of the VDI/VDE-Gesellschaft für Mess- und Automatisierungstechnik (GMA), held in Dortmund on 27-28 November 2014. The focal topics are methods, applications, and tools for fuzzy systems, artificial neural networks, evolutionary algorithms, and data-mining techniques, as well as the comparison of methods on industrial applications and benchmark problems.

    Proceedings. 27. Workshop Computational Intelligence, Dortmund, 23. - 24. November 2017

    These proceedings contain the contributions to the 27th workshop Computational Intelligence. The focal topics are methods, applications, and tools for fuzzy systems, artificial neural networks, evolutionary algorithms, and data-mining techniques, as well as the comparison of methods on industrial and benchmark problems.

    Modelling fish habitat preference with a genetic algorithm-optimized Takagi-Sugeno model based on pairwise comparisons

    Species-environment relationships are used for evaluating the current status of target species and the potential impact of natural or anthropogenic changes on their habitat. Recent research has reported that the results are strongly affected by the quality of the data set used. The present study applied pairwise comparisons to modelling fish habitat preference with Takagi-Sugeno-type fuzzy habitat preference models (FHPMs) optimized by a genetic algorithm (GA). The model was compared with the FHPM optimized on mean squared error (MSE). Three independent data sets were used for training and testing of these models. The FHPMs based on pairwise comparisons produced variable habitat preference curves from 20 different initial conditions in the GA. This could be partially ascribed to the optimization process and the constraints assigned. This case study demonstrates the applicability and limitations of pairwise comparison-based optimization in an FHPM. Future research should focus on a more flexible learning process to make good use of the advantages of pairwise comparisons.
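
    As a hedged sketch of the kind of model described (a zero-order Takagi-Sugeno model with triangular membership functions and a pairwise-comparison fitness; the habitat variables, rule shapes, and numbers below are illustrative assumptions, not the study's actual model):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def ts_preference(depth, velocity, params):
    """Zero-order Takagi-Sugeno model: each rule pairs one membership
    function per input with a constant preference value; the output is
    the firing-strength-weighted average of those constants."""
    strengths, consequents = [], []
    for d_mf, v_mf, pref in params:          # one tuple per fuzzy rule
        strengths.append(tri(depth, *d_mf) * tri(velocity, *v_mf))
        consequents.append(pref)
    w = np.array(strengths)
    return float(np.dot(w, consequents) / w.sum()) if w.sum() > 0 else 0.0

def pairwise_fitness(params, observations):
    """Fraction of observation pairs whose modelled preference order agrees
    with the observed order -- the quantity a GA could maximise instead of MSE."""
    correct = total = 0
    for i in range(len(observations)):
        for j in range(i + 1, len(observations)):
            (d1, v1, p1), (d2, v2, p2) = observations[i], observations[j]
            if p1 == p2:
                continue
            model_says = ts_preference(d1, v1, params) > ts_preference(d2, v2, params)
            correct += int(model_says == (p1 > p2))
            total += 1
    return correct / total if total else 0.0

# Two illustrative rules: "shallow and slow -> high preference", "deep and fast -> low".
rules = [((0.0, 0.3, 0.8), (0.0, 0.2, 0.6), 0.9),
         ((0.5, 1.2, 2.0), (0.4, 0.9, 1.5), 0.1)]
print(ts_preference(depth=0.4, velocity=0.3, params=rules))
```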

    Distance-based decision tree algorithms for label ranking

    The problem of Label Ranking is receiving increasing attention from several research communities. The algorithms that have been developed or adapted to treat rankings as the target object follow two different approaches: distribution-based (e.g., using the Mallows model) or correlation-based (e.g., using Spearman's rank correlation coefficient). Decision trees have been adapted for label ranking following both approaches. In this paper we evaluate an existing correlation-based approach and propose a new one, Entropy-based Ranking trees. We then compare and discuss the results with a distribution-based approach. The results clearly indicate that both approaches are competitive.
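
    For illustration, the correlation-based ingredient can be made concrete with Spearman's rank correlation; the node-scoring function below is a plausible sketch of how a ranking tree could score the homogeneity of a node, not necessarily the splitting criterion used in the paper:

```python
import numpy as np

def spearman(r1, r2):
    """Spearman's rank correlation between two complete label rankings,
    each given as the rank position of every label (no ties)."""
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    n = len(r1)
    d2 = np.sum((r1 - r2) ** 2)
    return 1.0 - 6.0 * d2 / (n * (n ** 2 - 1))

def mean_intra_correlation(rankings):
    """Average pairwise Spearman correlation inside a candidate tree node:
    a correlation-based, impurity-like score a ranking tree could maximise."""
    rankings = list(rankings)
    pairs = [(i, j) for i in range(len(rankings)) for j in range(i + 1, len(rankings))]
    if not pairs:
        return 1.0
    return float(np.mean([spearman(rankings[i], rankings[j]) for i, j in pairs]))

# Example: three instances ranking four labels (1 = most preferred).
print(mean_intra_correlation([[1, 2, 3, 4], [1, 3, 2, 4], [2, 1, 4, 3]]))
```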

    Possibilistic classifiers for numerical data

    Naive Bayesian classifiers, which rely on independence hypotheses together with a normality assumption to estimate densities for numerical data, are known for their simplicity and their effectiveness. However, estimating densities, even under the normality assumption, may be problematic in case of poor data. In such a situation, possibility distributions may provide a more faithful representation of these data. Naive possibilistic classifiers (NPC), based on possibility theory, have recently been proposed as a counterpart of Bayesian classifiers to deal with classification tasks. There are only a few works that treat possibilistic classification, and most existing NPCs deal only with categorical attributes. This work focuses on the estimation of possibility distributions for continuous data. In this paper we investigate two kinds of possibilistic classifiers. The first is derived from classical or flexible Bayesian classifiers by applying a probability-possibility transformation to Gaussian distributions, which introduces some further tolerance in the description of classes. The second is based on a direct interpretation of the data in possibilistic formats that exploit an idea of proximity between data values in different ways, which provides a less constrained representation of them. We show that possibilistic classifiers have a better capability than Bayesian classifiers to detect new instances for which the classification is ambiguous, since probabilities may be poorly estimated and illusorily precise. Moreover, we propose in this case a hybrid possibilistic classification approach based on a nearest-neighbour heuristic to improve the accuracy of the proposed possibilistic classifiers when the available information is insufficient to choose between classes. Possibilistic classifiers are compared with classical or flexible Bayesian classifiers on a collection of benchmark databases. The reported experiments show the interest of possibilistic classifiers. In particular, flexible possibilistic classifiers perform well for data agreeing with the normality assumption, while proximity-based possibilistic classifiers outperform the others in the remaining cases. The hybrid possibilistic classification exhibits a good ability to improve accuracy.
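
    One classical probability-possibility transformation for a Gaussian (the paper may use a different variant) takes pi(x) as the probability mass of the values that are no more likely than x. A minimal sketch of such a naive possibilistic scoring step follows; the class parameters and the min-based aggregation are illustrative assumptions:

```python
from scipy.stats import norm

def gaussian_possibility(x, mu, sigma):
    """Probability-possibility transformation of a Gaussian N(mu, sigma^2):
    pi(x) = P(|X - mu| >= |x - mu|), i.e. the probability mass of values
    no more likely than x. pi(mu) = 1 and pi decreases towards 0."""
    return 2.0 * (1.0 - norm.cdf(abs(x - mu) / sigma))

def npc_score(instance, class_params):
    """Naive possibilistic score of one class: combine the per-attribute
    possibility degrees with a minimum (a common NPC aggregation)."""
    return min(gaussian_possibility(x, mu, sigma)
               for x, (mu, sigma) in zip(instance, class_params))

# Two classes described by (mean, std) per attribute -- illustrative numbers.
classes = {
    "A": [(0.0, 1.0), (5.0, 2.0)],
    "B": [(3.0, 1.0), (4.0, 1.5)],
}
x = [1.2, 4.5]
scores = {c: npc_score(x, p) for c, p in classes.items()}
print(scores)  # low scores for every class flag an ambiguous or novel instance
```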

    Multi-Target Prediction: A Unifying View on Problems and Methods

    Multi-target prediction (MTP) is concerned with the simultaneous prediction of multiple target variables of diverse types. Due to its enormous application potential, it has developed into an active and rapidly expanding research field that combines several subfields of machine learning, including multivariate regression, multi-label classification, multi-task learning, dyadic prediction, zero-shot learning, network inference, and matrix completion. In this paper, we present a unifying view on MTP problems and methods. First, we formally discuss commonalities and differences between existing MTP problems. To this end, we introduce a general framework that covers the above subfields as special cases. As a second contribution, we provide a structured overview of MTP methods. This is accomplished by identifying a number of key properties, which distinguish such methods and determine their suitability for different types of problems. Finally, we also discuss a few challenges for future research.

    Combination of linear classifiers using score function -- analysis of possible combination strategies

    In this work, we address the issue of combining linear classifiers using their score functions, where the value of the score function depends on the distance from the decision boundary. Two score functions were tested and four different combination strategies were investigated. In the experimental study, the proposed approach was applied to a heterogeneous ensemble and compared against two reference methods -- majority voting and model averaging. The comparison was made in terms of seven different quality criteria. The results show that combination strategies based on the simple average and the trimmed average are the best strategies for the geometrical combination.
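
    A minimal sketch of the idea, assuming the score is the signed distance to each classifier's hyperplane and showing the simple-average and trimmed-average strategies; the ensemble parameters and data below are illustrative, not taken from the paper:

```python
import numpy as np

def signed_distance(x, w, b):
    """Score of a linear classifier: signed distance of x to the decision
    hyperplane w.x + b = 0 (positive on the positive-class side)."""
    return (np.dot(w, x) + b) / np.linalg.norm(w)

def combine(scores, strategy="mean", trim=1):
    """Combine per-classifier scores; the sign of the result is the decision."""
    s = np.sort(np.asarray(scores, dtype=float))
    if strategy == "mean":
        return float(s.mean())
    if strategy == "trimmed":                # drop the `trim` lowest and highest scores
        return float(s[trim:len(s) - trim].mean())
    raise ValueError(f"unknown strategy: {strategy}")

# Three hypothetical linear classifiers (w, b) scoring one point.
ensemble = [(np.array([1.0, 2.0]), -0.5),
            (np.array([0.5, -1.0]), 0.2),
            (np.array([2.0, 0.3]), -1.0)]
x = np.array([0.4, 0.6])
scores = [signed_distance(x, w, b) for w, b in ensemble]
label = 1 if combine(scores, "mean") > 0 else -1
```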

    Preference-Based CBR: A Search-Based Problem Solving Framework
